
    Characterizing object- and position-dependent response profiles to uni- and bilateral stimulus configurations in human higher visual cortex: a 7T fMRI study

    Visual scenes are initially processed via segregated neural pathways dedicated to either of the two visual hemifields. Although higher-order visual areas are generally believed to utilize invariant object representations (abstracted away from features such as stimulus position), recent findings suggest they retain more spatial information than previously thought. Here, we assessed the nature of such higher-order object representations in human cortex using high-resolution fMRI at 7T, supported by corroborative 3T data. We show that multi-voxel activation patterns in both the contra- and ipsilateral hemisphere can be exploited to successfully classify the object category of unilaterally presented stimuli. Moreover, robustly identified rank-order-based response profiles demonstrated a strong contralateral bias that frequently outweighed object category preferences. Finally, we contrasted different combinatorial operations to predict the responses during bilateral stimulation conditions based on responses to their constituent unilateral elements. Results favored a max operation predominantly reflecting the contralateral stimuli. The current findings extend previous work by showing that configuration-dependent modulations in higher-order visual cortex responses as observed in single-unit activity have a counterpart in human neural population coding. They furthermore corroborate the emerging view that position coding is a fundamental functional characteristic of ventral visual stream processing.
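    The comparison of combinatorial operations in this abstract can be illustrated with a minimal sketch: given voxel-wise responses to the left and right unilateral stimuli, candidate operations such as max, mean, and sum are scored by how well they predict the observed bilateral response pattern. The simulated data and variable names below are illustrative assumptions only, not the authors' code or data.

```python
import numpy as np

# Hypothetical multi-voxel response patterns (one value per voxel); the numbers
# are simulated for illustration, not data from the study.
rng = np.random.default_rng(0)
n_voxels = 200
resp_left = rng.normal(1.0, 0.5, n_voxels)    # response to left unilateral stimulus
resp_right = rng.normal(0.4, 0.5, n_voxels)   # response to right unilateral stimulus
resp_bilateral = np.maximum(resp_left, resp_right) + rng.normal(0, 0.2, n_voxels)

# Candidate combinatorial operations for predicting the bilateral response
predictions = {
    "max":  np.maximum(resp_left, resp_right),
    "mean": (resp_left + resp_right) / 2.0,
    "sum":  resp_left + resp_right,
}

# Score each operation by how well it correlates with the observed bilateral pattern
for name, pred in predictions.items():
    r = np.corrcoef(pred, resp_bilateral)[0, 1]
    print(f"{name:>4}: r = {r:.2f}")
```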

    Contextual Encoder-Decoder Network for Visual Saliency Prediction

    Predicting salient regions in natural images requires the detection of objects that are present in a scene. To develop robust representations for this challenging task, high-level visual features at multiple spatial scales must be extracted and augmented with contextual information. However, existing models aimed at explaining human fixation maps do not incorporate such a mechanism explicitly. Here we propose an approach based on a convolutional neural network pre-trained on a large-scale image classification task. The architecture forms an encoder-decoder structure and includes a module with multiple convolutional layers at different dilation rates to capture multi-scale features in parallel. Moreover, we combine the resulting representations with global scene information for accurately predicting visual saliency. Our model achieves competitive and consistent results across multiple evaluation metrics on two public saliency benchmarks, and we demonstrate the effectiveness of the suggested approach on five datasets and selected examples. Compared to state-of-the-art approaches, the network is based on a lightweight image classification backbone and hence presents a suitable choice for applications with limited computational resources, such as (virtual) robotic systems, to estimate human fixations across complex natural scenes. (Comment: Accepted Manuscript)
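    The multi-scale module described above amounts to parallel convolutions at different dilation rates whose outputs are concatenated and fused. The PyTorch sketch below illustrates that general idea only; the class name, channel counts, and dilation rates are assumptions and do not reproduce the authors' published architecture.

```python
import torch
import torch.nn as nn

class MultiDilationModule(nn.Module):
    """Parallel 3x3 convolutions at several dilation rates, concatenated and fused.

    Illustrative sketch of a multi-scale context module; the configuration is
    assumed, not taken from the paper.
    """

    def __init__(self, in_channels=512, branch_channels=128, rates=(1, 4, 8, 12)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_channels, branch_channels, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(branch_channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # 1x1 convolution to fuse the concatenated multi-scale features
        self.fuse = nn.Conv2d(branch_channels * len(rates), in_channels, kernel_size=1)

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]  # same spatial size per branch
        return self.fuse(torch.cat(feats, dim=1))

# Example: an encoder feature map of size (batch, 512, 30, 40)
features = torch.randn(1, 512, 30, 40)
out = MultiDilationModule()(features)
print(out.shape)  # torch.Size([1, 512, 30, 40])
```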

    Holistic Face Categorization in Higher Order Visual Areas of the Normal and Prosopagnosic Brain: Toward a Non-Hierarchical View of Face Perception

    How a visual stimulus is initially categorized as a face in a network of human brain areas remains largely unclear. Hierarchical neuro-computational models of face perception assume that the visual stimulus is first decomposed into local parts in lower order visual areas. These parts would then be combined into a global representation in higher order face-sensitive areas of the occipito-temporal cortex. Here we tested this view with fMRI using visual stimuli that are categorized as faces based on their global configuration rather than their local parts (two-tone Mooney figures and Arcimboldo's facelike paintings). Compared to the same inverted visual stimuli that are not categorized as faces, these stimuli activated the right middle fusiform gyrus (“Fusiform face area”) and superior temporal sulcus (pSTS), with no significant activation in the posteriorly located inferior occipital gyrus (i.e., no “occipital face area”). This observation is strengthened by behavioral and neural evidence for normal face categorization of these stimuli in a brain-damaged prosopagnosic patient whose intact right middle fusiform gyrus and superior temporal sulcus are devoid of any potential face-sensitive inputs from the lesioned right inferior occipital cortex. Together, these observations indicate that face-preferential activation may emerge in higher order visual areas of the right hemisphere without any face-preferential inputs from lower order visual areas, supporting a non-hierarchical view of face perception in the visual cortex.

    A multiplex connectivity map of valence-arousal emotional model

    A large number of studies have already demonstrated electroencephalography (EEG)-based emotion recognition systems with moderate results. Emotions are classified into discrete and dimensional models; we focused on the latter, which incorporates the valence and arousal dimensions. The mainstream methodology is the extraction of univariate measures derived from EEG activity in various frequencies, classifying trials into low/high valence and arousal levels. Here, we evaluated brain connectivity within and between brain frequencies under the multiplexity framework. We analyzed an EEG database called DEAP that contains EEG responses to video stimuli and users’ emotional self-assessments. We adopted a dynamic functional connectivity analysis under the notion of our dominant coupling model (DoCM). DoCM detects the dominant coupling mode per pair of EEG sensors, which can be either within-frequency coupling (intra-frequency) or between-frequency coupling (cross-frequency). DoCM produces an integrated dynamic functional connectivity graph (IDFCG) that keeps both the strength and the preferred dominant coupling mode. We aimed to create a connectomic mapping of the valence-arousal space by employing features derived from the IDFCG. Our results outperformed previous findings, predicting participants’ ratings on the valence and arousal dimensions with high accuracy based on a flexibility index of the dominant coupling modes.
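    As a rough illustration of the final step, the sketch below computes a simple flexibility index (how often the dominant coupling mode of each sensor pair switches across time windows) and feeds it to an off-the-shelf classifier for low/high valence labels. The array shapes, the flexibility definition, and the classifier choice are assumptions made for this example; they do not reproduce the DoCM pipeline or the DEAP preprocessing.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical integrated dynamic functional connectivity graph (IDFCG):
# dominant coupling mode per trial, time window, and sensor pair, encoded as
# integer labels (e.g. 0 = alpha-alpha, 1 = theta-gamma, ...). Simulated data.
rng = np.random.default_rng(0)
n_trials, n_windows, n_pairs = 120, 50, 32 * 31 // 2
modes = rng.integers(0, 6, size=(n_trials, n_windows, n_pairs))

# Flexibility index: fraction of consecutive windows in which the dominant
# coupling mode of a sensor pair changes (one value per trial and pair).
flexibility = (np.diff(modes, axis=1) != 0).mean(axis=1)

# Binary low/high valence labels (simulated self-assessments for this sketch)
valence = rng.integers(0, 2, size=n_trials)

# Cross-validated classification of valence from the flexibility features
clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, flexibility, valence, cv=5)
print(f"mean accuracy: {scores.mean():.2f}")
```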

    Playing Charades in the fMRI: Are Mirror and/or Mentalizing Areas Involved in Gestural Communication?

    Communication is an important aspect of human life, allowing us to powerfully coordinate our behaviour with that of others. Boiled down to its mere essentials, communication entails transferring a mental content from one brain to another. Spoken language obviously plays an important role in communication between human individuals. Manual gestures, however, often aid the semantic interpretation of the spoken message, and gestures may have played a central role in the earlier evolution of communication. Here we used the social game of charades to investigate the neural basis of gestural communication by having participants produce and interpret meaningful gestures while their brain activity was measured using functional magnetic resonance imaging. While participants decoded observed gestures, the putative mirror neuron system (pMNS: premotor, parietal and posterior mid-temporal cortex), associated with motor simulation, and the temporo-parietal junction (TPJ), associated with mentalizing and agency attribution, were significantly recruited. Of these areas, only the pMNS was recruited during the production of gestures. This suggests that gestural communication relies on a combination of simulation and, during decoding, mentalizing/agency attribution brain areas. Comparing the decoding of gestures with a condition in which participants viewed the same gestures with an instruction not to interpret the gestures showed that although parts of the pMNS responded more strongly during active decoding, most of the pMNS and the TPJ did not show such significant task effects. This suggests that the mere observation of gestures recruits most of the system involved in voluntary interpretation.

    Impaired Face Discrimination in Acquired Prosopagnosia Is Associated with Abnormal Response to Individual Faces in the Right Middle Fusiform Gyrus

    The middle fusiform gyrus (MFG) and the inferior occipital gyrus (IOG) are activated by both detection and identification of faces. Paradoxically, patients with acquired prosopagnosia following lesions to either of these regions in the right hemisphere cannot identify faces, but can still detect faces. Here we acquired functional magnetic resonance imaging (fMRI) data during face processing in a patient presenting with a specific deficit in individual face recognition, following lesions encompassing the right IOG. Using an adaptation paradigm, we show that the fMRI signal in the rMFG of the patient, while being larger in response to faces as compared to objects, does not differ between conditions presenting identical and distinct faces, in contrast to the larger response to distinct faces observed in controls. These results suggest that individual discrimination of faces critically depends on the integrity of both the rMFG and the rIOG, which may interact through re-entrant cortical connections in the normal brain.